Collaborative Spectrum Sensing from Sparse Observations Using Matrix Completion for Cognitive Radio Networks
In cognitive radio, spectrum sensing is a key component to detect spectrum
holes (i.e., channels not used by any primary users). Collaborative spectrum
sensing among the cognitive radio nodes is expected to improve the ability to
assess complete spectrum usage. Unfortunately, due to power limitations and
channel fading, the available channel sensing information is far from
sufficient to identify the unoccupied channels directly. Aiming to break this
bottleneck, we apply recent matrix completion techniques to greatly reduce the
sensing information needed. We formulate the collaborative sensing problem as a
matrix completion subproblem and a joint-sparsity reconstruction subproblem.
We present numerical simulation results that validate the effectiveness and
robustness of the proposed approach. In particular, in noiseless cases, when
the number of primary users is small, exact detection was obtained with no
more than 8% of the complete sensing information, while as the number of
primary users increases, a detection rate of 95.55% required merely 16.8% of
the information.
Collaborative Spectrum Sensing from Sparse Observations in Cognitive Radio Networks
Spectrum sensing, which aims at detecting spectrum holes, is the precondition
for the implementation of cognitive radio (CR). Collaborative spectrum sensing
among the cognitive radio nodes is expected to improve the ability to assess
complete spectrum usage. Due to hardware limitations, each cognitive radio node
can only sense a relatively narrow band of radio spectrum. Consequently, the
available channel sensing information is far from being sufficient for
precisely recognizing the wide range of unoccupied channels. Aiming at breaking
this bottleneck, we propose to apply matrix completion and joint sparsity
recovery to reduce sensing and transmitting requirements and improve sensing
results. Specifically, equipped with a frequency selective filter, each
cognitive radio node senses linear combinations of multiple channel information
and reports them to the fusion center, where occupied channels are then decoded
from the reports by using novel matrix completion and joint sparsity recovery
algorithms. As a result, the number of reports sent from the CRs to the fusion
center is significantly reduced. We propose two decoding approaches, one based
on matrix completion and the other based on joint sparsity recovery, both of
which allow exact recovery from incomplete reports. The numerical results
validate the effectiveness and robustness of our approaches. In particular, in
small-scale networks, the matrix completion approach achieves exact channel
detection with a number of samples no more than 50% of the number of channels
in the network, while joint sparsity recovery achieves similar performance in
large-scale networks.
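The joint sparsity recovery route can be illustrated with a small stand-alone example. The abstract does not spell out the authors' decoding algorithm, so the sketch below uses simultaneous orthogonal matching pursuit (SOMP), a standard joint-sparsity solver, as a stand-in; all dimensions and signal models are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: each of n_nodes CR nodes reports m random linear
# combinations of n_channels channel powers (the frequency selective
# filter). The channel-power matrix X is row-sparse because only a few
# channels are occupied, and all nodes share that support.
n_channels, n_nodes, m, k = 50, 10, 15, 3
occupied = rng.choice(n_channels, size=k, replace=False)

X = np.zeros((n_channels, n_nodes))
X[occupied] = 1.0 + rng.random((k, n_nodes))     # powers seen per node
A = rng.standard_normal((m, n_channels)) / np.sqrt(m)
Y = A @ X                                        # compressed reports

# Simultaneous OMP: greedily pick the channel (row) that best explains
# the residual across *all* nodes at once -- the joint-sparsity step.
support, R = [], Y.copy()
for _ in range(k):
    scores = np.linalg.norm(A.T @ R, axis=1)
    scores[support] = 0.0                        # skip chosen channels
    support.append(int(np.argmax(scores)))
    sub = A[:, support]
    coef, *_ = np.linalg.lstsq(sub, Y, rcond=None)
    R = Y - sub @ coef

print("recovered occupied channels:", sorted(support))
```

Because every node's report constrains the same support, the fusion center needs far fewer reports per node (here 15 measurements for 50 channels) than separate per-node recovery would.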
Guideline for Green Design of Printers and Multi-function Printers
This paper specifies the purposes and basic principles of green design for printers and multi-function printers, and provides the corresponding green design requirements and procedures. It offers guiding targets for printer manufacturers, standardizes enterprise practices, and guides printer enterprises in implementing green product design, which is of great significance for promoting the transformation and upgrading of printer products, enhancing green manufacturing levels, and increasing the supply of green products.
Guidelines for Green Design of Televisions
This paper specifies the purposes and basic principles of green design for televisions, and provides the corresponding green design requirements and procedures. It offers guiding targets for television manufacturers, standardizes enterprise practices, and guides television enterprises in implementing green product design, which is of great significance for promoting the transformation and upgrading of television products, enhancing green manufacturing levels, and increasing the supply of green products.
Guideline for Green Design of Computer Products
This paper specifies the purposes and basic principles of green design for computer products, and provides the corresponding green design requirements and procedures. It offers guiding targets for computer product manufacturers, standardizes enterprise practices, and guides computer product enterprises in implementing green product design, which is of great significance for promoting the transformation and upgrading of computer products, enhancing green manufacturing levels, and increasing the supply of green products.
Flew Over Learning Trap: Learn Unlearnable Samples by Progressive Staged Training
Unlearning techniques have been proposed to prevent third parties from
exploiting unauthorized data; they generate unlearnable samples by adding
imperceptible perturbations to data before public release. These unlearnable
samples effectively misguide model training into learning perturbation
features while ignoring image semantic features. We conduct an in-depth
analysis and observe that models can learn both image features and
perturbation features of unlearnable samples at an early stage, but then
rapidly enter an overfitting stage, since the shallow layers tend to overfit
on perturbation features. Based on these observations, we propose Progressive
Staged Training to effectively prevent models from overfitting on perturbation
features. We evaluate our method on multiple model architectures over diverse
datasets, e.g., CIFAR-10, CIFAR-100, and ImageNet-mini. Our method circumvents
the unlearnability of all state-of-the-art methods in the literature and
provides a reliable baseline for further evaluation of unlearnable techniques.
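The staged-training idea can be made concrete with a framework-agnostic scheduling sketch. The abstract does not give the paper's exact staging rule, so the stage boundaries, the "train deeper layers first, open up shallow layers last" policy, and the layer names below are purely illustrative assumptions motivated by the observation that shallow layers overfit on perturbation features first.

```python
# A framework-agnostic sketch of a staged-training schedule; the policy
# and layer names are illustrative assumptions, not the paper's method.

def staged_schedule(layers, n_stages, epochs_per_stage):
    """Yield (epoch, trainable_layers). Early stages train only the
    deeper layers, limiting the shallow layers' chance to overfit on
    perturbation features; the final stage trains every layer."""
    for stage in range(n_stages):
        # widen the trainable set from the deepest layers outward
        cutoff = len(layers) - (stage + 1) * len(layers) // n_stages
        trainable = layers[cutoff:] if stage < n_stages - 1 else layers
        for e in range(epochs_per_stage):
            yield stage * epochs_per_stage + e, list(trainable)

layers = ["conv1", "conv2", "conv3", "fc"]
plan = {epoch: trainable
        for epoch, trainable in staged_schedule(layers, n_stages=2,
                                                epochs_per_stage=2)}
print(plan)
```

In a real training loop the `trainable` list would control which parameter groups receive gradient updates at each epoch.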
Monitoring of postoperative neutrophil-to-lymphocyte ratio, D-dimer, and CA153: Diagnostic value for recurrent and metastatic breast cancer
Objective: This study aims to assess the value of monitoring postoperative neutrophil-to-lymphocyte ratio (NLR), D-dimer, and carbohydrate antigen 153 (CA153) for diagnosing breast cancer (BC) recurrence and metastasis.

Materials/Methods: A cohort of 252 BC patients who underwent surgery at the First Affiliated Hospital of Anhui Medical University between August 2008 and August 2018 was enrolled in this retrospective study. All patients were examined during outpatient follow-ups every 3 months for 5 years postoperation and every 6 months thereafter. Recurrence or metastasis was recorded for 131 patients but not for the remaining 121. Retrospective analysis of hematological parameters and clinicopathological characteristics allowed comparison between the two groups and evaluation of these parameters for the recurrent and metastatic patients.

Results: Lymph node metastasis, higher tumor node metastasis (TNM) staging, and higher histological grade correlated with BC recurrence and metastasis (p < 0.05). Statistically significant differences were found in absolute neutrophil count (ANC), absolute lymphocyte count (ALC), CEA, CA153, D-dimer, NLR, platelet-to-lymphocyte ratio (PLR), and monocyte-to-lymphocyte ratio (MLR) between the recurrent and metastatic group and the control group (p < 0.05). Logistic regression analysis showed that CA153, D-dimer, NLR, and TNM staging were risk factors for BC recurrence and metastasis (p < 0.05). The combined NLR, D-dimer, and CA153 values had good diagnostic value, giving the highest area under the curve (AUC) of 0.913. High NLR, D-dimer, and CA153 values were significantly associated with recurrence and metastasis at multiple sites, lymph node metastasis, and higher TNM staging (p < 0.05). Patients with high CA153 were more likely to have bone metastases (p < 0.05), and those with high D-dimer were prone to lung metastasis (p < 0.05). With increasing length of the postoperative period, the possibility of liver metastases gradually decreased, while that of chest wall recurrence gradually increased (p < 0.05).

Conclusion: Monitoring postoperative NLR, D-dimer, and CA153 is a convenient, practical method for diagnosing BC recurrence and metastasis. These metrics have good predictive value in terms of the sites of recurrence and metastasis and the likelihood of multiple metastases.
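The AUC statistic reported for the combined markers can be illustrated with fully synthetic data (no patient data are reproduced here; the marker distributions and the fixed combination weights standing in for fitted logistic coefficients are invented for the example). The AUC is computed as the Mann-Whitney probability that a randomly chosen case outscores a randomly chosen control.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic illustration only: recurrence/metastasis cases tend to have
# higher NLR, D-dimer, and CA153 than controls. Group sizes mirror the
# study (121 controls, 131 cases); the distributions are invented.
n_ctrl, n_case = 121, 131
ctrl = rng.normal([2.0, 0.3, 15.0], [0.6, 0.1, 5.0], size=(n_ctrl, 3))
case = rng.normal([3.2, 0.6, 28.0], [0.9, 0.2, 9.0], size=(n_case, 3))

# Hypothetical fixed weights standing in for fitted logistic regression
# coefficients; a real analysis would estimate them from the data.
w = np.array([0.8, 3.0, 0.05])
scores = np.r_[ctrl @ w, case @ w]
labels = np.r_[np.zeros(n_ctrl), np.ones(n_case)]

def auc(scores, labels):
    """AUC as the Mann-Whitney probability that a random case
    outscores a random control (ties count half)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

print(f"combined-score AUC: {auc(scores, labels):.3f}")
```

An AUC near 1 indicates the combined score separates the groups well; the study's reported combined AUC of 0.913 sits in this "good diagnostic value" range.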
Pushing the Limits of Machine Design: Automated CPU Design with AI
Design activity -- constructing an artifact description satisfying given
goals and constraints -- distinguishes humanity from other animals and
traditional machines, and endowing machines with design abilities at the human
level or beyond has been a long-term pursuit. Though machines have already
demonstrated their abilities in designing new materials, proteins, and computer
programs with advanced artificial intelligence (AI) techniques, the search
space for designing such objects is relatively small, and thus, "Can machines
design like humans?" remains an open question. To explore the boundary of
machine design, here we present a new AI approach to automatically design a
central processing unit (CPU), the brain of a computer and one of the world's
most intricate devices humanity has ever designed. This approach generates the
circuit logic, which is represented by a graph structure called Binary
Speculation Diagram (BSD), of the CPU design from only external input-output
observations instead of formal program code. During the generation of BSD,
Monte Carlo-based expansion and the distance of Boolean functions are used to
guarantee accuracy and efficiency, respectively. By efficiently exploring a
search space of unprecedented size 10^{10^{540}}, to the best of our knowledge
the largest among all machine-designed objects, and thus pushing the limits
of machine design, our approach generates an industrial-scale RISC-V CPU within
only 5 hours. The taped-out CPU successfully runs the Linux operating system
and performs comparably to the human-designed Intel 80486SX CPU. In
addition to learning the world's first CPU only from input-output observations,
which may reform the semiconductor industry by significantly reducing the
design cycle, our approach even autonomously discovers human knowledge of the
von Neumann architecture.
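The "distance of Boolean functions" used to guide the search can be illustrated on a toy scale. The abstract does not define the exact metric, so the choice shown here, the normalized Hamming distance between truth tables (the fraction of inputs on which two candidate circuits disagree), is an assumption, albeit a standard one for comparing Boolean functions.

```python
from itertools import product

# Normalized Hamming distance between the truth tables of two Boolean
# functions over n_inputs bits. This metric is an illustrative
# assumption; the paper's exact distance is not given in the abstract.

def truth_table(f, n_inputs):
    return [f(bits) for bits in product((0, 1), repeat=n_inputs)]

def bool_distance(f, g, n_inputs):
    tf, tg = truth_table(f, n_inputs), truth_table(g, n_inputs)
    return sum(a != b for a, b in zip(tf, tg)) / len(tf)

# Toy target: 2-input XOR; candidate: OR. They disagree only on the
# input (1, 1), i.e. on 1 of the 4 possible inputs.
def xor(b): return b[0] ^ b[1]
def or_(b): return b[0] | b[1]

print(bool_distance(xor, or_, 2))   # -> 0.25
```

At CPU scale an exact truth-table comparison is infeasible, which is presumably why the paper pairs such a distance with Monte Carlo-based expansion over sampled input-output observations.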
Asymptotic Analysis of Large Cooperative Relay Networks Using Random Matrix Theory
Cooperative transmission is an emerging communication technology that takes advantage of the broadcast nature of wireless channels. In cooperative transmission, the use of relays can create a virtual antenna array so that multiple-input/multiple-output (MIMO) techniques can be employed. Most existing work in this area has focused on the situation in which there are a small number of sources and relays and a destination. In this paper, cooperative relay networks with large numbers of nodes are analyzed, and in particular the asymptotic performance improvement of cooperative transmission over direct transmission and relay transmission is analyzed using random matrix theory. The key idea is to investigate the eigenvalue distributions related to channel capacity and to analyze the moments of this distribution in large wireless networks. A performance upper bound is derived, the performance in the low signal-to-noise-ratio regime is analyzed, and two approximations are obtained for high and low relay-to-destination link qualities, respectively. Finally, simulations are provided to validate the accuracy of the analytical results. The analysis in this paper provides important tools for the understanding and the design of large cooperative wireless networks.
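The random-matrix-theory step, studying the eigenvalue distribution and its moments for large channel matrices, can be sketched with a quick simulation. The setup below (an i.i.d. complex Gaussian channel matrix and the Marchenko-Pastur limit of the eigenvalues of the sample Gram matrix) is a standard illustration of that machinery, not the paper's specific relay-network model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative sketch: for a large i.i.d. channel matrix H (e.g.
# relays x sources), the eigenvalues of (1/n) H^H H follow the
# Marchenko-Pastur law, whose moments have closed forms. Quantities
# like log det(I + snr/n * H^H H) then concentrate, which is what
# makes the asymptotic capacity analysis tractable.
n_relays, n_sources = 400, 200          # aspect ratio c = 0.5
c = n_sources / n_relays
H = (rng.standard_normal((n_relays, n_sources))
     + 1j * rng.standard_normal((n_relays, n_sources))) / np.sqrt(2)
eig = np.linalg.eigvalsh(H.conj().T @ H / n_relays)

# First two Marchenko-Pastur moments: E[x] = 1, E[x^2] = 1 + c
print(f"empirical m1 = {eig.mean():.3f}  (MP limit: 1.000)")
print(f"empirical m2 = {(eig**2).mean():.3f}  (MP limit: {1 + c:.3f})")
```

Even at this moderate size the empirical moments sit close to their deterministic limits, which is the concentration effect the asymptotic analysis exploits.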